
    Double freeform illumination design for prescribed wavefronts and irradiances

    A mathematical model in terms of partial differential equations (PDE) for the calculation of double freeform surfaces for irradiance and phase control with predefined input and output wavefronts is presented. It extends the results of Bösel and Gross [J. Opt. Soc. Am. A 34, 1490 (2017)] for the illumination design of single freeform surfaces for zero-étendue light sources to double freeform lenses and mirrors. The PDE model thereby overcomes the restriction to paraxiality, or the requirement of at least one planar wavefront, of the current design models in the literature. In contrast to the single freeform illumination design, the PDE system does not reduce to a Monge-Ampère type equation for the unknown freeform surfaces if nonplanar input and output wavefronts are assumed. Additionally, a numerical solving strategy for the PDE model is presented. To show its efficiency, the algorithm is applied to the design of a double freeform mirror system and a double freeform lens system.

    Single freeform surface design for prescribed input wavefront and target irradiance

    In beam shaping applications, minimizing the number of optical elements needed for the beam shaping process can improve the compactness of the optical system and reduce its cost. The design of a single freeform surface for input wavefronts that are neither planar nor spherical is therefore of interest. In this work, the design of single freeform surfaces for a given zero-étendue source and complex target irradiances is investigated. Hence, not only collimated input beams or point sources are assumed. Instead, a predefined input ray direction vector field and irradiance distribution on a source plane is considered, which has to be redistributed by a single freeform surface to give the predefined target irradiance. To solve this design problem, a partial differential equation (PDE) or PDE system, respectively, for the unknown surface and its corresponding ray mapping is derived from energy conservation and the ray-tracing equations. In contrast to former PDE formulations of the single freeform design problem, the derived PDE of Monge-Ampère type is formulated for general zero-étendue sources in Cartesian coordinates. The PDE system is discretized with finite differences, and the resulting nonlinear equation system is solved by a root-finding algorithm. The efficient solution of the PDE system rests on an initial-iterate construction approach for a given input direction vector field, which uses optimal mass transport with a quadratic cost function. After a detailed description of the numerical algorithm, the efficiency of the design method is demonstrated by applying it to several design examples. These include the redistribution of a collimated input beam beyond the paraxial approximation, the shaping of point source radiation, and the shaping of an astigmatic input wavefront into a complex target irradiance distribution.
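    The initial-iterate construction mentioned above is easiest to see in one dimension, where optimal mass transport with a quadratic cost reduces to monotone rearrangement, i.e. matching cumulative energy distributions. A minimal sketch under that simplification (the irradiance profiles and grid below are illustrative assumptions, not the paper's test cases):

```python
import numpy as np

def ray_mapping_1d(x_src, I_src, x_tgt, I_tgt):
    """Map each source position to the target position carrying the same
    cumulated energy (energy conservation; monotone rearrangement)."""
    F_src = np.cumsum(I_src); F_src = F_src / F_src[-1]   # source CDF
    F_tgt = np.cumsum(I_tgt); F_tgt = F_tgt / F_tgt[-1]   # target CDF
    # invert the target CDF at the source CDF values
    return np.interp(F_src, F_tgt, x_tgt)

x = np.linspace(-1.0, 1.0, 512)
I_in = np.exp(-8.0 * x**2)          # Gaussian input irradiance (assumed)
I_out = np.ones_like(x)             # flat-top target irradiance (assumed)
m = ray_mapping_1d(x, I_in, x, I_out)  # monotone 1-D ray mapping
```

    In two dimensions the corresponding step is a genuine optimal-transport problem, but the energy-balancing idea is the same.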

    Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features

    During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6–1.3 Hz), words (1.8–3 Hz), syllables (2.8–4.8 Hz), and phonemes (8–12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13–30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory–motor pathway.
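    The phase-to-power coupling described above is commonly quantified with a phase-amplitude coupling index. A hedged sketch on synthetic data (the filter bands follow the abstract, but the mean-vector-length measure and all signal parameters are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(sig, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

def phase_amplitude_coupling(sig, fs, phase_band, amp_band):
    """Mean vector length: how strongly amp-band amplitude depends on
    phase-band phase (0 = no coupling)."""
    phase = np.angle(hilbert(bandpass(sig, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(sig, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 200.0
t = np.arange(0, 60, 1 / fs)
slow = np.sin(2 * np.pi * 1.0 * t)                  # phrasal-rate component
beta = (1 + slow) * np.sin(2 * np.pi * 20.0 * t)    # beta power tied to slow phase
rng = np.random.default_rng(0)
coupled = slow + 0.5 * beta + 0.1 * rng.standard_normal(t.size)
mi = phase_amplitude_coupling(coupled, fs, (0.6, 1.3), (13.0, 30.0))
```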

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing we need to combine knowledge from both levels of investigation. We review recent progress on how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    Level-of-detail for cognitive real-time characters

    We present a solution for the real-time simulation of artificial environments containing cognitive and hierarchically organized agents at constant rendering framerates. We introduce a level-of-detail concept to behavioral modeling, where agents populating the world can be both reactive and proactive. The time available per rendered frame for behavioral simulation is variable and determines the complexity of the presented behavior. A special scheduling algorithm distributes this time to the agents depending on their level-of-detail such that visible and nearby agents get more time than invisible or distant agents. This allows for smooth transitions between reactive and proactive behavior. The time available per agent influences the proactive behavior, which becomes more sophisticated because it can spend time anticipating future situations. Additionally, we exploit the use of hierarchies within groups of agents that allow for different levels of control. We show that our approach is well-suited for simulating environments with up to several hundred agents with reasonable response times, and that the behavior adapts to the current viewpoint.
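    The budget-distribution idea can be sketched as a weighted split of a fixed per-frame time budget, where a level-of-detail weight favours visible, nearby agents. The weighting function and numbers below are illustrative assumptions, not the paper's actual algorithm:

```python
def lod_weight(distance, visible, max_distance=100.0):
    """Level-of-detail weight: visible, nearby agents score highest, while
    distant or invisible agents keep a small share so their behaviour
    still progresses."""
    proximity = max(0.0, 1.0 - distance / max_distance)
    return (1.0 if visible else 0.1) * (0.1 + proximity)

def schedule(agents, frame_budget_ms):
    """Split a fixed per-frame time budget among agents by weight."""
    weights = [lod_weight(a["distance"], a["visible"]) for a in agents]
    total = sum(weights)
    return [frame_budget_ms * w / total for w in weights]

agents = [
    {"name": "guide", "distance": 5.0, "visible": True},
    {"name": "walker", "distance": 60.0, "visible": True},
    {"name": "offscreen", "distance": 20.0, "visible": False},
]
slices = schedule(agents, frame_budget_ms=4.0)  # milliseconds per agent
```

    A proactive agent with a larger slice can then spend its surplus on planning, while a starved agent falls back to purely reactive behavior.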

    Irregular speech rate dissociates auditory cortical entrainment, evoked responses, and frontal alpha

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features.
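    Entrainment fidelity of the kind measured above is often operationalized as spectral coherence between the speech envelope and the neural signal, read out per frequency band. An illustrative sketch on synthetic signals (all parameters are assumptions, not the study's EEG pipeline):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, coherence

fs = 100.0
rng = np.random.default_rng(1)
n = int(120 * fs)

# Delta-band (1-4 Hz) filtered noise stands in for a speech envelope;
# the "neural" signal tracks it plus independent noise.
sos = butter(4, [1.0, 4.0], btype="band", fs=fs, output="sos")
envelope = sosfiltfilt(sos, rng.standard_normal(n))
neural = envelope + 0.5 * rng.standard_normal(n)

f, coh = coherence(envelope, neural, fs=fs, nperseg=1024)
delta = coh[(f >= 1) & (f <= 4)].mean()   # high: shared delta fluctuations
theta = coh[(f > 4) & (f <= 8)].mean()    # lower: little shared theta content
```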

    Simple acoustic features can explain phoneme-based predictions of cortical responses to speech

    When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
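    An encoding model of the kind compared above can be sketched as ridge regression of a neural response onto time-lagged copies of an acoustic feature (a temporal response function). Everything below is synthetic and illustrative; the feature, lags, and regularization strength are assumptions, not the paper's analysis:

```python
import numpy as np

def lagged_design(feature, n_lags):
    """Design matrix of time-lagged copies of a 1-D stimulus feature."""
    X = np.zeros((feature.size, n_lags))
    for k in range(n_lags):
        X[k:, k] = feature[: feature.size - k]
    return X

rng = np.random.default_rng(0)
# Sparse impulse train standing in for acoustic-edge events (assumed).
edges = (rng.random(2000) < 0.05).astype(float)
true_trf = np.array([0.0, 1.0, 0.5, 0.25, 0.0])   # ground-truth response shape
X = lagged_design(edges, n_lags=5)
y = X @ true_trf + 0.1 * rng.standard_normal(X.shape[0])  # noisy "brain" signal

lam = 1.0  # ridge regularization strength (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)  # ridge estimate
```

    With enough data the estimated filter recovers the ground-truth response, which is what lets such models be compared on held-out prediction accuracy.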